
Gatsby Computational Neuroscience Unit


Peter Orbanz

Tuesday 19th March 2019

Time: 4.00pm

Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG

Sampling and embedding network data

Graph embeddings represent each vertex in a graph by a point in a Euclidean space, and are used as a form of representation learning for network data. Roughly speaking, points that are close represent vertices that are "similar". I will explain how such embeddings can be formulated as empirical risk minimization. The definition of the empirical risk has to take into account how the graph is sampled; by choosing the right combination of sampling method, loss, and predictor, one can recover well-known embedding methods. The empirical risk formulation leads to fast implementations using automatic differentiation. It is also sufficiently precise to be analyzed mathematically, and I will sketch what we can prove about embeddings, why obtaining theoretical results can be hard, and what techniques we have developed so far to obtain them.
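
As a rough illustration of this empirical-risk view (a minimal sketch, not the specific formulation presented in the talk), the code below embeds a graph by sampling vertex pairs, scoring each pair with an inner-product predictor on the embedding vectors, and minimizing a logistic loss by automatic differentiation. The graph, embedding dimension, sampling scheme, and the function name embed_graph are illustrative assumptions.

# Minimal sketch: a graph embedding posed as empirical risk minimization.
# Positive pairs are uniformly sampled edges; negatives are uniformly
# sampled vertex pairs (occasionally true edges, which is fine for a sketch).
import torch

def embed_graph(edges, num_vertices, dim=16, steps=500, lr=0.1, seed=0):
    torch.manual_seed(seed)
    emb = torch.nn.Embedding(num_vertices, dim)   # one point in R^dim per vertex
    opt = torch.optim.Adam(emb.parameters(), lr=lr)
    edges = torch.as_tensor(edges)                # shape (num_edges, 2)
    for _ in range(steps):
        # Sampling method: 64 positive pairs (edges), 64 negative pairs.
        pos = edges[torch.randint(len(edges), (64,))]
        neg = torch.randint(num_vertices, (64, 2))
        pairs = torch.cat([pos, neg])
        labels = torch.cat([torch.ones(64), torch.zeros(64)])
        # Predictor: inner product of the two vertex embeddings.
        scores = (emb(pairs[:, 0]) * emb(pairs[:, 1])).sum(dim=1)
        # Loss: logistic loss; its average over samples is the empirical risk.
        loss = torch.nn.functional.binary_cross_entropy_with_logits(scores, labels)
        opt.zero_grad()
        loss.backward()     # gradients via automatic differentiation
        opt.step()
    return emb.weight.detach()

# Example: embed a 4-cycle; vertices 0..3 each get a 16-dimensional point.
vectors = embed_graph([(0, 1), (1, 2), (2, 3), (3, 0)], num_vertices=4)

Swapping in other sampling schemes (e.g. random-walk co-occurrence pairs), losses, or predictors changes which known embedding method this template corresponds to, which is roughly the sense in which the abstract's "right combination of sampling method, loss, and predictor" recovers well-known methods.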